
    Network Neutrality or Internet Innovation?

    Over the past two decades, the Internet has undergone an extensive re-ordering of its topology that has resulted in increased variation in the price and quality of its services. Innovations such as private peering, multihoming, secondary peering, server farms, and content delivery networks have caused the Internet’s traditionally hierarchical architecture to be replaced by one that is more heterogeneous. Relatedly, network providers have begun to employ an increasingly varied array of business arrangements and pricing. Some have interpreted this variation as network providers promoting their self-interest at the expense of the public. In fact, these changes reflect network providers’ attempts to reduce cost, manage congestion, and maintain quality of service. Current policy proposals to constrain this variation risk harming these beneficial developments.
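
    To make the topological shift concrete, the following minimal sketch contrasts the two architectures on an entirely hypothetical five-network topology (the network names and the code are illustrative, not drawn from the article): adding a single secondary-peering link between two edge networks cuts the path between them from four hops through the backbone to one.

```python
# A minimal sketch (hypothetical topology, not from the article) contrasting a strict
# hierarchy, where edge networks exchange traffic only via a tier-1 backbone, with a
# flatter topology in which the same edge networks add a secondary-peering link.
from collections import deque

def hops(graph, src, dst):
    """Breadth-first search returning the hop count of the shortest path."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nbr in graph[node]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, dist + 1))
    return None

# Traditional hierarchy: edge_a and edge_b reach each other only through their
# regional transit providers and the tier-1 backbone.
hierarchy = {
    "edge_a": ["transit_a"],
    "edge_b": ["transit_b"],
    "transit_a": ["edge_a", "tier1"],
    "transit_b": ["edge_b", "tier1"],
    "tier1": ["transit_a", "transit_b"],
}

# Flatter topology: the same networks plus a direct secondary-peering link
# between the two edge networks.
flattened = {k: list(v) for k, v in hierarchy.items()}
flattened["edge_a"].append("edge_b")
flattened["edge_b"].append("edge_a")

print(hops(hierarchy, "edge_a", "edge_b"))  # 4 hops via the backbone
print(hops(flattened, "edge_a", "edge_b"))  # 1 hop via secondary peering
```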

    The Convergence of Broadcasting and Telephony: Legal and Regulatory Implications

    This article, written for the inaugural issue of a new journal, analyzes the extent to which the convergence of broadcasting and telephony induced by the digitization of communications technologies is forcing policymakers to rethink their basic approach to regulating these industries. Now that voice and video are becoming available through every transmission technology, policymakers can no longer define the scope of regulatory obligations in terms of the mode of transmission. In addition, jurisdictions that employ separate agencies to regulate broadcasting and telephony must reform their institutional structures to bring both within the ambit of a single regulatory agency. The emergence of intermodal competition will also place pressure both on telephone-style regulation, which protects against monopoly pricing and vertical exclusion, and on broadcast-style regulation, which focuses on content and ownership structure. It will also force regulators to rethink social policies such as universal service and public broadcasting. At the same time, it is possible that convergence will be incomplete and that end users will maintain more than one network connection, which would reduce the danger of anticompetitive activity and allow policymakers to stop short of forcing every connection to be everything to everyone. Lastly, the increase in traffic volumes associated with the advent of Internet video may require the deployment of multicast protocols, content delivery networks, and more aggressive traffic management, all of which potentially implicate the debate over network neutrality currently taking place in the U.S. This article was published in Communications & Convergence Review 2009, vol. 1, no. 1, pp. 44-55.
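
    As a rough illustration of why Internet video strains unicast delivery, the back-of-the-envelope sketch below (using invented numbers, not figures from the article) compares the upstream load of unicast and multicast distribution of a single live stream.

```python
# Back-of-the-envelope comparison (illustrative numbers, not from the article) of the
# upstream load imposed by one live video stream under unicast vs. multicast delivery.
stream_rate_mbps = 5     # hypothetical bitrate of one video stream
viewers = 10_000         # hypothetical concurrent audience behind one access network

# Unicast: the origin sends a separate copy of the stream to every viewer.
unicast_load = stream_rate_mbps * viewers

# Multicast: each link carries at most one copy; replication happens inside the network.
multicast_load = stream_rate_mbps

print(f"unicast load:   {unicast_load:,} Mbps")    # 50,000 Mbps
print(f"multicast load: {multicast_load:,} Mbps")  # 5 Mbps
```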

    Technologies of Control and the Future of the First Amendment

    The technological context surrounding the Supreme Court’s landmark decision in FCC v. Pacifica Foundation allowed the Court to gloss over the tension between two rather disparate rationales. Those adopting a civil libertarian view of free speech could support the decision on the grounds that viewers’ and listeners’ inability to filter out unwanted speech exposed them to content that they did not wish to see or hear. At the same time, Pacifica also found support from those who more paternalistically regard indecency as low-value (if not socially harmful) speech that is unworthy of full First Amendment protection. The arrival of filtering technologies has introduced a wedge between those who supported the constitutionality of indecency regulations out of a desire to enhance individual autonomy and those who wish to restrict speech in order to promote a particular vision of the public good. At the same time, commentators on the political left have begun to question whether continued support for the classic liberal vision of free speech may be interfering with the advancement of progressive values. This Article offers a qualified defense of the libertarian vision of free speech. Deviating from the civil libertarian view would require a revolution in doctrine and would contradict the postulate of independent moral agency that lies at the heart of liberal theory. Although some suggested institutions for ascertaining the idealized preferences that individuals ought to have could justify allowing the government to override individuals’ actual preferences, such an approach is all too reminiscent of the Rousseauian notion of being “forced to be free” and has never been accepted by the Supreme Court. Finally, claims that private censorship presents risks commensurate with public censorship fail to address the fact that liberal theory presupposes the existence of a private sphere into which the state cannot intrude, as well as the long tradition recognizing the special dangers associated with the coercive power of the state. Moreover, the rationales upon which the Supreme Court has relied to justify overriding individual preferences in broadcasting and cable have been undermined by technological change.

    Copyright and Product Differentiation

    Modularity Theory and Internet Regulation

    Modularity is often cited as one of the foundations for the Internet’s success. Unfortunately, academic discussions about modularity appearing in the literature on Internet policy are undertheorized. The persistence of nonmodular architectures for some technologies underscores the need for some theoretical basis for determining when modularity is the preferred approach. Even when modularity is desirable, theory must provide some basis for making key design decisions, such as the number of modules, the location of the interfaces between the modules, and the information included in those interfaces. The literature on innovation indicates that modules should be determined by the nature of task interdependencies and the variety inherent in the external environment. Moreover, modular designs use interfaces to ensure that modules operate independently, hiding within each module all information about processes that adjacent modules should not take into account. These insights in turn offer a number of important implications. They mark a return to a more technological vision of vertical integration that deviates from the transaction-cost-oriented vision that now dominates the literature. They also reveal how modularity necessarily limits the functionality of any particular architecture. In addition, although the independence fostered by modularity remains one of its primary virtues, it can also create coordination problems in which actors operating within each module optimize based on local conditions in ways that can lead to suboptimal outcomes for the system as a whole. Lastly, like any design hierarchy, modular systems can resist technological change. These insights shed new light on the unbundling of telecommunications networks, network neutrality, calls for open APIs, and clean-slate redesign proposals.
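
    The information-hiding principle summarized above can be made concrete with a minimal sketch; the module names and interface below are invented for illustration rather than drawn from the article.

```python
# A minimal sketch of information hiding behind a module interface: the interface
# exposes only what adjacent modules need, and everything else stays inside the
# module, so implementations can change independently. Names are illustrative.
from abc import ABC, abstractmethod

class Transport(ABC):
    """The interface: adjacent modules see only send(); internals are hidden."""
    @abstractmethod
    def send(self, payload: bytes) -> None: ...

class ReliableTransport(Transport):
    def send(self, payload: bytes) -> None:
        # Hidden detail: retransmission state that callers never see.
        self._retries = 3
        print(f"reliable send of {len(payload)} bytes (up to {self._retries} retries)")

class BestEffortTransport(Transport):
    def send(self, payload: bytes) -> None:
        # Hidden detail: no delivery guarantees; again invisible to callers.
        print(f"best-effort send of {len(payload)} bytes")

def application(transport: Transport) -> None:
    # The application module depends only on the interface, so either
    # implementation can be swapped in without changing this code.
    transport.send(b"hello")

application(ReliableTransport())
application(BestEffortTransport())
```

    The sketch also exhibits the limitation the article identifies: because application() sees only the interface, it can never exploit a module’s hidden internals, which is precisely how modularity constrains the functionality of any particular architecture.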

    Introduction, in CRITICAL CONCEPTS IN INTELLECTUAL PROPERTY LAW: COPYRIGHT

    The two-volume set entitled Critical Concepts in Intellectual Property Law: Copyright brings together a thought-provoking collection of landmark and recent scholarship on copyright. Section 1 of Volume I focuses on the history of copyright, with Tyler Ochoa and Mark Rose providing an example of the prevailing interpretation of the history and articles by Thomas Nachbar and by William Treanor and Paul Schwartz offering fresh takes on the early English and American experiences. Section 2 focuses on copyright’s philosophical foundations, framed by the work of Justin Hughes and followed by revisionist perspectives on Lockean and Hegelian theory offered by Seana Shiffrin and Jeanne Schroeder. Section 3 focuses on democratic theories, presented through the work of Neil Netanel and a critique of Netanel authored by Shyamkrishna Balganesh. Volume II examines the economics of copyright. Section 1 of Volume II covers public goods economics, monopoly theory, and price discrimination, introducing the concepts through the work of Terry Fisher before turning to Michael Meurer’s and Christopher Yoo’s critiques and extensions of the conventional wisdom. Section 2 focuses on transaction cost economics, beginning with Wendy Gordon’s classic article on fair use, followed by Robert Merges’s celebrated study of how holders of intellectual property rights can contract into liability rules, Abraham Bell and Gideon Parchomovsky’s analysis of the forces that can drive a legal regime from open access property to private property and vice versa, and Clarisa Long’s discussion of the way that intellectual property can minimize information costs. Section 3 explores the political economy of copyright, including Jessica Litman’s analysis of the political dynamics surrounding the Copyright Act of 1976, Thomas Nachbar’s review of Noah Webster’s campaign to establish copyright in the colonies and the early United States, and Robert Merges’s survey considering theories of political economy that go beyond simple rent seeking.

    The Changing Patterns of Internet Usage

    Symposium: Essays from Time Warner Cable’s Research Program on Digital Communications

    The Post-Chicago Antitrust Revolution: A Retrospective

    A symposium examining the contributions of the post-Chicago School provides an appropriate opportunity to offer some thoughts on both the past and the future of antitrust. This afterword reviews the excellent papers with an eye toward appreciating the contributions and limitations of both the Chicago School, in terms of promoting the consumer welfare standard and embracing price theory as the preferred mode of economic analysis, and the post-Chicago School, with its emphasis on game theory and firm-level strategic conduct. It then explores two emerging trends, specifically neo-Brandeisian advocacy for abandoning consumer welfare as the sole goal of antitrust and the increasing emphasis on empirical analyses.
    Keywords: antitrust law & policy, competition, political economy, economic theory, empiricism, intellectual history, consumer welfare, economic efficiency, price theory, Chicago & Harvard School

    Network Neutrality after Comcast: Toward a Case-by-Case Approach to Reasonable Network Management

    The Federal Communications Commission’s recent Comcast decision rejected categorical, ex ante restrictions on Internet providers’ ability to manage their networks in favor of a more flexible approach that examines each dispute on a case-by-case basis, as I have long advocated. This book chapter, written for a conference held in February 2009, discusses the considerations that a case-by-case approach should take into account. First, allowing the network to evolve will promote innovation by allowing the emergence of applications that depend on a fundamentally different network architecture. Indeed, as the universe of Internet users and applications becomes more heterogeneous, it is only natural for the services that networks provide to diversify in response. Allowing prioritized services would also benefit consumers by allowing them to purchase only the level of service that they need. More diverse business relationships would also allow the network to reflect the insights of two-sided markets, which suggest that the money flowing through the network will often vary in magnitude and direction over time. Any mandated access regime would also confront substantial implementation difficulties and would raise the capital costs of deploying new network facilities. Lastly, a case-by-case approach to network neutrality would provide better ex ante guidance if it incorporated the jurisprudence developed by the Supreme Court applying the rule of reason under the antitrust laws.
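
    As a rough illustration of the prioritized services discussed above, the sketch below (with invented traffic tiers and packet names) dequeues latency-sensitive traffic ahead of bulk transfers, the basic mechanism behind tiered network management.

```python
# A minimal sketch (hypothetical service tiers, not from the chapter) of prioritized
# service: latency-sensitive traffic is dequeued before bulk traffic, so users who
# need a higher tier can purchase it without eliminating best-effort service.
import heapq

PRIORITY = {"voip": 0, "video": 1, "bulk": 2}  # lower number = served first

queue, seq = [], 0
for tier, packet in [("bulk", "backup-1"), ("voip", "call-1"),
                     ("video", "frame-1"), ("voip", "call-2")]:
    heapq.heappush(queue, (PRIORITY[tier], seq, tier, packet))
    seq += 1  # sequence number keeps ordering stable within a tier

while queue:
    _, _, tier, packet = heapq.heappop(queue)
    print(f"{tier}: {packet}")
# Output order: call-1, call-2, frame-1, backup-1
```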

    The Changing Patterns of Internet Usage

    The Internet unquestionably represents one of the most important technological developments in recent history. It has revolutionized the way people communicate with one another and obtain information and has created an unimaginable variety of commercial and leisure activities. Interestingly, many members of the engineering community observe that the current network is ill-suited to handle the demands that end users are placing on it. Indeed, engineering researchers often describe the network as ossified and impervious to significant architectural change. As a result, both the U.S. government and the European Commission are sponsoring “clean slate” projects to study how the Internet might be designed differently if it were designed from scratch today. This Essay explores emerging trends that are transforming the way end users are using the Internet and examines their implications both for network architecture and public policy. These trends include Internet protocol video, wireless broadband, cloud computing, programmable networking, and pervasive computing and sensor networks. It discusses how these changes in the way people are using the network may require the network to evolve in new directions.